4.1 Main effect of labels on desire for plant-based foods
4.1.1 Confirmatory
First, I build the mixed-effects model for desire, predicted by label_type, for plant-based foods only, following a maximal random-effects structure. We counterbalanced which food was assigned to which label type across participants, so label_type is modeled as a random slope for the food grouping factor.
Each participant (pp) also saw every level of label_type, because the experiment was within-subjects, so label_type additionally gets a by-participant random slope. I use sum-to-zero contrasts because they can help with model convergence.
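As a quick reminder of what sum-to-zero coding implies for the fixed effects (a minimal sketch, using the four label levels that appear later in the model output): each of the three label_type coefficients is one level's deviation from the grand mean, and the fourth level's deviation is the negative sum of the other three.

```r
# Sum-to-zero (deviation) coding for a four-level factor: three columns,
# each contrasting one level against the grand mean; the last level gets -1s.
contr.sum(c("contextual", "health_positive", "neutral", "sensory"))
##                 [,1] [,2] [,3]
## contextual         1    0    0
## health_positive    0    1    0
## neutral            0    0    1
## sensory           -1   -1   -1
```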
The model has problems converging and gives a singular-fit warning. It’s not immediately obvious where the model hits the boundary: no variance estimates are close to zero and no correlations are close to ±1, except the correlation between the food intercept and the label_type1 slope, which is exactly 1. Inspecting the thetas (the Cholesky parameters of the random-effects covariance) shows that the random slope of one label is estimated at or very close to zero.
# set contrasts
options(contrasts = c("contr.sum", "contr.poly"))
# get a plants-only data set
desire_plants <-
  desire %>%
  filter(food_type == "plant_based") %>%
  mutate(food_type = droplevels(food_type))
# construct model
h1_desire_model <- lme4::lmer(
  rating1 ~
    label_type +
    (1 + label_type | pp) +
    (1 + label_type | food),
  desire_plants,
  control = lmerControl(
    optCtrl = list(maxfun = 1e9)
  )
)
# inspect model
summary(h1_desire_model)
## Linear mixed model fit by REML ['lmerMod']
## Formula: rating1 ~ label_type + (1 + label_type | pp) + (1 + label_type |
## food)
## Data: desire_plants
## Control: lmerControl(optCtrl = list(maxfun = 1e+09))
##
## REML criterion at convergence: 32474.7
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.85324 -0.73507 -0.00855 0.71937 2.71349
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## pp (Intercept) 200.3955 14.1561
## label_type1 43.5035 6.5957 -0.06
## label_type2 13.6059 3.6886 0.24 -0.28
## label_type3 8.7277 2.9543 -0.12 -0.93 0.52
## food (Intercept) 51.8187 7.1985
## label_type1 0.1775 0.4213 1.00
## label_type2 5.9210 2.4333 -0.04 -0.04
## label_type3 7.4198 2.7239 -0.27 -0.27 -0.66
## Residual 717.9239 26.7941
## Number of obs: 3400, groups: pp, 170; food, 20
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 46.7826 1.9952 23.447
## label_type1 0.4616 0.9479 0.487
## label_type2 -1.4559 1.0049 -1.449
## label_type3 -0.2231 1.0276 -0.217
##
## Correlation of Fixed Effects:
## (Intr) lbl_t1 lbl_t2
## label_type1 0.064
## label_type2 0.021 -0.267
## label_type3 -0.142 -0.343 -0.383
## convergence code: 0
## boundary (singular) fit: see ?isSingular
# check smallest parameter
getME(h1_desire_model,"theta")
## pp.(Intercept) pp.label_type1.(Intercept)
## 0.528329415 -0.013809304
## pp.label_type2.(Intercept) pp.label_type3.(Intercept)
## 0.033533032 -0.012960571
## pp.label_type1 pp.label_type2.label_type1
## 0.245775462 -0.037301752
## pp.label_type3.label_type1 pp.label_type2
## -0.103863829 0.128202371
## pp.label_type3.label_type2 pp.label_type3
## 0.034636504 0.001211042
## food.(Intercept) food.label_type1.(Intercept)
## 0.268660763 0.015724315
## food.label_type2.(Intercept) food.label_type3.(Intercept)
## -0.003346533 -0.027194595
## food.label_type1 food.label_type2.label_type1
## 0.000000000 -0.089966750
## food.label_type3.label_type1 food.label_type2
## 0.059725062 0.011925883
## food.label_type3.label_type2 food.label_type3
## -0.065425424 0.041807834
isSingular(h1_desire_model, tol = 1e-5)
## [1] TRUE
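For reference, this is exactly what isSingular() checks: theta holds the Cholesky parameters of the scaled random-effects covariance, and a fit is flagged as singular when any parameter that is bounded below at zero (the diagonal elements) sits at that bound. A minimal sketch with lme4's built-in sleepstudy data (not our data):

```r
library(lme4)
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
# diagonal theta elements have a lower bound of 0; correlation parameters
# are unbounded. This is essentially what isSingular(fm, tol = 1e-5) computes:
any(getME(fm, "theta")[getME(fm, "lower") == 0] < 1e-5)
## [1] FALSE
```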
Therefore, it’s probably a good idea to remove the random-effect correlations within the food factor, via afex::lmer_alt(), which expands the factor so that the double-bar syntax actually drops the correlations, and see whether that resolves the singularity. It doesn’t: now the variance of the label_type1 slope within food is estimated as exactly zero.
# construct model
h1_desire_model_no_food_corr <- lmer_alt(
  rating1 ~
    label_type +
    (1 + label_type | pp) +
    (1 + label_type || food),
  desire_plants,
  control = lmerControl(
    optCtrl = list(maxfun = 1e9)
  )
)
# inspect model
summary(h1_desire_model_no_food_corr)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: rating1 ~ label_type + (1 + re1.label_type1 + re1.label_type2 +
## re1.label_type3 | pp) + (1 + re2.label_type1 + re2.label_type2 +
## re2.label_type3 || food)
## Data: data
## Control: lmerControl(optCtrl = list(maxfun = 1e+09))
##
## REML criterion at convergence: 32476.5
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.86787 -0.74086 -0.00959 0.72017 2.74209
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## pp (Intercept) 199.920 14.139
## re1.label_type1 43.679 6.609 -0.05
## re1.label_type2 13.403 3.661 0.24 -0.28
## re1.label_type3 8.901 2.983 -0.12 -0.93 0.52
## food (Intercept) 51.869 7.202
## food.1 re2.label_type1 0.000 0.000
## food.2 re2.label_type2 3.177 1.782
## food.3 re2.label_type3 4.010 2.002
## Residual 718.892 26.812
## Number of obs: 3400, groups: pp, 170; food, 20
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 46.7816 1.9952 37.2186 23.447 <2e-16 ***
## label_type1 0.4639 0.9442 167.2478 0.491 0.624
## label_type2 -1.4596 0.9339 32.1702 -1.563 0.128
## label_type3 -0.2218 0.9420 31.1712 -0.235 0.815
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) lbl_t1 lbl_t2
## label_type1 -0.015
## label_type2 0.039 -0.286
## label_type3 -0.016 -0.360 -0.202
## convergence code: 0
## boundary (singular) fit: see ?isSingular
# check smallest parameter
getME(h1_desire_model_no_food_corr,"theta")
## pp.(Intercept) pp.re1.label_type1.(Intercept)
## 0.52734724 -0.01305881
## pp.re1.label_type2.(Intercept) pp.re1.label_type3.(Intercept)
## 0.03287394 -0.01312642
## pp.re1.label_type1 pp.re1.label_type2.re1.label_type1
## 0.24614686 -0.03686695
## pp.re1.label_type3.re1.label_type1 pp.re1.label_type2
## -0.10486987 0.12729734
## pp.re1.label_type3.re1.label_type2 pp.re1.label_type3
## 0.03480369 0.00000000
## food.(Intercept) food.re2.label_type1
## 0.26860976 0.00000000
## food.re2.label_type2 food.re2.label_type3
## 0.06647395 0.07468488
isSingular(h1_desire_model_no_food_corr, tol = 1e-5)
## [1] TRUE
Next, I try removing the random correlations for pp instead, but that leads to more problems: the by-participant variance of the label_type3 slope collapses to zero.
# construct model
h1_desire_model_no_pp_corr <- lmer_alt(
  rating1 ~
    label_type +
    (1 + label_type || pp) +
    (1 + label_type | food),
  desire_plants,
  control = lmerControl(
    optCtrl = list(maxfun = 1e9)
  )
)
# inspect model
summary(h1_desire_model_no_pp_corr)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: rating1 ~ label_type + (1 + re1.label_type1 + re1.label_type2 +
## re1.label_type3 || pp) + (1 + re2.label_type1 + re2.label_type2 +
## re2.label_type3 | food)
## Data: data
## Control: lmerControl(optCtrl = list(maxfun = 1e+09))
##
## REML criterion at convergence: 32479.2
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.88349 -0.73716 -0.00159 0.72018 2.70847
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## pp (Intercept) 200.249 14.151
## pp.1 re1.label_type1 23.505 4.848
## pp.2 re1.label_type2 10.802 3.287
## pp.3 re1.label_type3 0.000 0.000
## food (Intercept) 51.810 7.198
## re2.label_type1 1.160 1.077 0.42
## re2.label_type2 4.856 2.204 -0.04 0.71
## re2.label_type3 10.052 3.171 -0.24 -0.97 -0.64
## Residual 723.779 26.903
## Number of obs: 3400, groups: pp, 170; food, 20
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 46.7830 1.9953 37.1913 23.446 <2e-16 ***
## label_type1 0.4584 0.9138 88.7823 0.502 0.617
## label_type2 -1.4565 0.9722 24.9465 -1.498 0.147
## label_type3 -0.2204 1.0684 17.4171 -0.206 0.839
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) lbl_t1 lbl_t2
## label_type1 0.088
## label_type2 -0.014 -0.145
## label_type3 -0.130 -0.388 -0.421
## convergence code: 0
## boundary (singular) fit: see ?isSingular
# check smallest parameter
getME(h1_desire_model_no_pp_corr,"theta")
## pp.(Intercept) pp.re1.label_type1
## 0.5259955059 0.1802098813
## pp.re1.label_type2 pp.re1.label_type3
## 0.1221672029 0.0000000000
## food.(Intercept) food.re2.label_type1.(Intercept)
## 0.2675480464 0.0166371811
## food.re2.label_type2.(Intercept) food.re2.label_type3.(Intercept)
## -0.0028803224 -0.0286267039
## food.re2.label_type1 food.re2.label_type2.re2.label_type1
## 0.0364147398 0.0650750422
## food.re2.label_type3.re2.label_type1 food.re2.label_type2
## -0.1124376913 0.0496599581
## food.re2.label_type3.re2.label_type2 food.re2.label_type3
## 0.0206427629 0.0008847844
isSingular(h1_desire_model_no_pp_corr, tol = 1e-5)
## [1] TRUE
Next, I check whether changing the optimizer helps and whether the estimates are stable across optimizers. Not all optimizers converge, which isn’t a good sign, and those that do converge show considerable discrepancies in their random-effects estimates.
h1_desire_model_all <- afex::all_fit(
  h1_desire_model,
  maxfun = 1e9
)
## bobyqa. : [OK]
## Nelder_Mead. : [OK]
## optimx.nlminb : [ERROR]
## optimx.L-BFGS-B : [ERROR]
## nloptwrap.NLOPT_LN_NELDERMEAD : [OK]
## nloptwrap.NLOPT_LN_BOBYQA : [OK]
## nmkbw. : [ERROR]
# which fits returned a valid model object
ok_fits <- sapply(h1_desire_model_all, is, "merMod")
ok_fits
## bobyqa. Nelder_Mead.
## TRUE TRUE
## optimx.nlminb optimx.L-BFGS-B
## FALSE FALSE
## nloptwrap.NLOPT_LN_NELDERMEAD nloptwrap.NLOPT_LN_BOBYQA
## TRUE TRUE
## nmkbw.
## FALSE
# which fits ran without throwing an error
!sapply(h1_desire_model_all, inherits, "try-error")
## bobyqa. Nelder_Mead.
## TRUE TRUE
## optimx.nlminb optimx.L-BFGS-B
## TRUE TRUE
## nloptwrap.NLOPT_LN_NELDERMEAD nloptwrap.NLOPT_LN_BOBYQA
## TRUE TRUE
## nmkbw.
## TRUE
# compare fixed effects
ok_fits_details <- h1_desire_model_all[ok_fits]
t(sapply(ok_fits_details, "fixef"))
## (Intercept) label_type1 label_type2
## bobyqa. 46.78268 0.4586223 -1.456177
## Nelder_Mead. 46.78375 0.4591444 -1.455460
## nloptwrap.NLOPT_LN_NELDERMEAD 46.78260 0.4616065 -1.455916
## nloptwrap.NLOPT_LN_BOBYQA 46.78260 0.4616065 -1.455916
## label_type3
## bobyqa. -0.2203767
## Nelder_Mead. -0.2220568
## nloptwrap.NLOPT_LN_NELDERMEAD -0.2230934
## nloptwrap.NLOPT_LN_BOBYQA -0.2230934
# compare random effects
t(sapply(ok_fits_details, getME, "theta"))
## pp.(Intercept) pp.label_type1.(Intercept)
## bobyqa. 0.5285398 -0.013668703
## Nelder_Mead. 0.5456959 -0.008990211
## nloptwrap.NLOPT_LN_NELDERMEAD 0.5283294 -0.013809304
## nloptwrap.NLOPT_LN_BOBYQA 0.5283294 -0.013809304
## pp.label_type2.(Intercept)
## bobyqa. 0.03338064
## Nelder_Mead. 0.03540711
## nloptwrap.NLOPT_LN_NELDERMEAD 0.03353303
## nloptwrap.NLOPT_LN_BOBYQA 0.03353303
## pp.label_type3.(Intercept) pp.label_type1
## bobyqa. -0.012696551 0.2445100
## Nelder_Mead. -0.008266452 0.2763200
## nloptwrap.NLOPT_LN_NELDERMEAD -0.012960571 0.2457755
## nloptwrap.NLOPT_LN_BOBYQA -0.012960571 0.2457755
## pp.label_type2.label_type1
## bobyqa. -0.03660130
## Nelder_Mead. -0.01824275
## nloptwrap.NLOPT_LN_NELDERMEAD -0.03730175
## nloptwrap.NLOPT_LN_BOBYQA -0.03730175
## pp.label_type3.label_type1 pp.label_type2
## bobyqa. -0.1027990 0.1280992
## Nelder_Mead. -0.1223092 0.1286608
## nloptwrap.NLOPT_LN_NELDERMEAD -0.1038638 0.1282024
## nloptwrap.NLOPT_LN_BOBYQA -0.1038638 0.1282024
## pp.label_type3.label_type2 pp.label_type3
## bobyqa. 0.034727208 0.000000000
## Nelder_Mead. 0.009490009 0.125855127
## nloptwrap.NLOPT_LN_NELDERMEAD 0.034636504 0.001211042
## nloptwrap.NLOPT_LN_BOBYQA 0.034636504 0.001211042
## food.(Intercept)
## bobyqa. 0.2691873
## Nelder_Mead. 1.6786511
## nloptwrap.NLOPT_LN_NELDERMEAD 0.2686608
## nloptwrap.NLOPT_LN_BOBYQA 0.2686608
## food.label_type1.(Intercept)
## bobyqa. 0.01581504
## Nelder_Mead. -0.14380861
## nloptwrap.NLOPT_LN_NELDERMEAD 0.01572432
## nloptwrap.NLOPT_LN_BOBYQA 0.01572432
## food.label_type2.(Intercept)
## bobyqa. -0.003378140
## Nelder_Mead. 0.571312183
## nloptwrap.NLOPT_LN_NELDERMEAD -0.003346533
## nloptwrap.NLOPT_LN_BOBYQA -0.003346533
## food.label_type3.(Intercept)
## bobyqa. -0.0274849
## Nelder_Mead. -0.2895133
## nloptwrap.NLOPT_LN_NELDERMEAD -0.0271946
## nloptwrap.NLOPT_LN_BOBYQA -0.0271946
## food.label_type1
## bobyqa. 0.03351366
## Nelder_Mead. 0.04151679
## nloptwrap.NLOPT_LN_NELDERMEAD 0.00000000
## nloptwrap.NLOPT_LN_BOBYQA 0.00000000
## food.label_type2.label_type1
## bobyqa. 0.06723607
## Nelder_Mead. -0.47075182
## nloptwrap.NLOPT_LN_NELDERMEAD -0.08996675
## nloptwrap.NLOPT_LN_BOBYQA -0.08996675
## food.label_type3.label_type1
## bobyqa. -0.10983767
## Nelder_Mead. -0.36538859
## nloptwrap.NLOPT_LN_NELDERMEAD 0.05972506
## nloptwrap.NLOPT_LN_BOBYQA 0.05972506
## food.label_type2
## bobyqa. 0.04975216
## Nelder_Mead. 0.79150054
## nloptwrap.NLOPT_LN_NELDERMEAD 0.01192588
## nloptwrap.NLOPT_LN_BOBYQA 0.01192588
## food.label_type3.label_type2
## bobyqa. 0.02194645
## Nelder_Mead. 0.46606050
## nloptwrap.NLOPT_LN_NELDERMEAD -0.06542542
## nloptwrap.NLOPT_LN_BOBYQA -0.06542542
## food.label_type3
## bobyqa. 0.00000000
## Nelder_Mead. 0.20880418
## nloptwrap.NLOPT_LN_NELDERMEAD 0.04180783
## nloptwrap.NLOPT_LN_BOBYQA 0.04180783
It might be time to simplify the model further to get it to converge. Before removing slopes entirely, I’ll first run the model without a random intercept for food and without correlations between the random effects. The fit is still singular.
# construct model
h1_desire_model_no_intercept <- lmer_alt(
  rating1 ~
    label_type +
    (1 + label_type || pp) +
    (0 + label_type || food),
  desire_plants,
  control = lmerControl(
    optCtrl = list(maxfun = 1e9)
  )
)
summary(h1_desire_model_no_intercept)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: rating1 ~ label_type + (1 + re1.label_type1 + re1.label_type2 +
## re1.label_type3 || pp) + (0 + re2.label_typecontextual +
## re2.label_typehealth_positive + re2.label_typeneutral + re2.label_typesensory ||
## food)
## Data: data
## Control: lmerControl(optCtrl = list(maxfun = 1e+09))
##
## REML criterion at convergence: 32529.3
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.84602 -0.74294 -0.00683 0.71675 2.82184
##
## Random effects:
## Groups Name Variance Std.Dev.
## pp (Intercept) 199.30 14.117
## pp.1 re1.label_type1 23.37 4.834
## pp.2 re1.label_type2 10.65 3.264
## pp.3 re1.label_type3 0.00 0.000
## food re2.label_typecontextual 59.14 7.690
## food.1 re2.label_typehealth_positive 53.24 7.297
## food.2 re2.label_typeneutral 51.60 7.183
## food.3 re2.label_typesensory 62.43 7.901
## Residual 725.73 26.939
## Number of obs: 3400, groups: pp, 170; food, 20
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 46.7811 1.4469 190.4606 32.331 <2e-16 ***
## label_type1 0.4610 1.7217 33.0913 0.268 0.791
## label_type2 -1.4550 1.6559 34.3539 -0.879 0.386
## label_type3 -0.2148 1.6243 33.5805 -0.132 0.896
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) lbl_t1 lbl_t2
## label_type1 0.013
## label_type2 -0.018 -0.319
## label_type3 -0.027 -0.318 -0.304
## convergence code: 0
## boundary (singular) fit: see ?isSingular
isSingular(h1_desire_model_no_intercept, tol = 1e-5)
## [1] TRUE
Because the random slopes for label_type within food seem to drive the singular fits, I’ll remove that slope. However, doing so prevents us from generalizing to other foods (if there is an effect), and the p-values for the effect of label_type become less conservative. The model fails to converge.
# construct model
h1_desire_model_no_slope <- lme4::lmer(
  rating1 ~
    label_type +
    (1 + label_type | pp) +
    (1 | food),
  desire_plants,
  control = lmerControl(
    optCtrl = list(maxfun = 1e9)
  )
)
summary(h1_desire_model_no_slope)
## Linear mixed model fit by REML ['lmerMod']
## Formula: rating1 ~ label_type + (1 + label_type | pp) + (1 | food)
## Data: desire_plants
## Control: lmerControl(optCtrl = list(maxfun = 1e+09))
##
## REML criterion at convergence: 32478.9
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.91258 -0.74349 0.00084 0.72470 2.62357
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## pp (Intercept) 199.582 14.127
## label_type1 42.776 6.540 -0.05
## label_type2 12.993 3.605 0.25 -0.26
## label_type3 9.012 3.002 -0.13 -0.93 0.51
## food (Intercept) 51.614 7.184
## Residual 722.473 26.879
## Number of obs: 3400, groups: pp, 170; food, 20
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 46.7815 1.9918 23.487
## label_type1 0.4642 0.9430 0.492
## label_type2 -1.4600 0.8450 -1.728
## label_type3 -0.2208 0.8311 -0.266
##
## Correlation of Fixed Effects:
## (Intr) lbl_t1 lbl_t2
## label_type1 -0.016
## label_type2 0.044 -0.312
## label_type3 -0.020 -0.408 -0.257
## convergence code: 0
## Model failed to converge with max|grad| = 0.0524254 (tol = 0.002, component 1)
Last, I’ll remove the food term entirely. The model still runs into boundary problems.
# construct model
h1_desire_model_no_food <- lme4::lmer(
  rating1 ~
    label_type +
    (1 + label_type | pp),
  desire_plants,
  control = lmerControl(
    optCtrl = list(maxfun = 1e9)
  )
)
summary(h1_desire_model_no_food)
## Linear mixed model fit by REML ['lmerMod']
## Formula: rating1 ~ label_type + (1 + label_type | pp)
## Data: desire_plants
## Control: lmerControl(optCtrl = list(maxfun = 1e+09))
##
## REML criterion at convergence: 32651.9
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.83348 -0.77891 -0.01034 0.75289 2.64046
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## pp (Intercept) 196.750 14.027
## label_type1 41.009 6.404 -0.06
## label_type2 9.833 3.136 0.23 -0.24
## label_type3 5.443 2.333 -0.09 -0.94 0.48
## Residual 777.391 27.882
## Number of obs: 3400, groups: pp, 170
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 46.7815 1.1773 39.737
## label_type1 0.4750 0.9629 0.493
## label_type2 -1.4709 0.8624 -1.706
## label_type3 -0.2062 0.8473 -0.243
##
## Correlation of Fixed Effects:
## (Intr) lbl_t1 lbl_t2
## label_type1 -0.026
## label_type2 0.060 -0.310
## label_type3 -0.018 -0.382 -0.284
## convergence code: 0
## boundary (singular) fit: see ?isSingular
# check smallest parameter
getME(h1_desire_model_no_food,"theta")
## pp.(Intercept) pp.label_type1.(Intercept)
## 0.503080456 -0.012663962
## pp.label_type2.(Intercept) pp.label_type3.(Intercept)
## 0.026279068 -0.007893868
## pp.label_type1 pp.label_type2.label_type1
## 0.229329438 -0.026060945
## pp.label_type3.label_type1 pp.label_type2
## -0.079397936 0.106201182
## pp.label_type3.label_type2 pp.label_type3
## 0.025205164 0.000000000
isSingular(h1_desire_model_no_food, tol = 1e-5)
## [1] TRUE
The next step would be removing the slope for pp, but that would inflate the Type I error rate and yield dramatically anticonservative p-values. Instead, I’ll run a repeated-measures ANOVA; note that afex::aov_ez() aggregates the data to one mean per participant and label type before fitting.
desire_anova <-
  aov_ez(
    id = "pp",
    dv = "rating1",
    data = desire_plants,
    within = "label_type"
  )
anova(desire_anova)
## Anova Table (Type 3 tests)
##
## Response: rating1
## num Df den Df MSE F ges Pr(>F)
## label_type 2.8937 489.04 181.68 1.2535 0.0026495 0.2898
The fractional degrees of freedom reflect the Greenhouse-Geisser sphericity correction that afex applies by default. The diagnostics look okay.
# get the residuals (one column per label_type level) and add the pp identifier
anova_residuals <- as_tibble(desire_anova$lm$residuals) %>%
  mutate(pp = row_number()) %>%
  select(pp, everything())
# inspect standardized residuals (densityplot() is from lattice)
densityplot(scale(anova_residuals$contextual))
densityplot(scale(anova_residuals$health_positive))
densityplot(scale(anova_residuals$neutral))
densityplot(scale(anova_residuals$sensory))
4.1.2 Exploratory
The main effect isn’t significant, but out of curiosity I’ll inspect the post-hoc tests anyway: all comparisons are nonsignificant.
desire_anova_posthocs <-
  emmeans(
    desire_anova,
    pairwise ~ label_type
  )
desire_anova_posthocs
## $emmeans
## label_type emmean SE df lower.CL upper.CL
## contextual 47.3 1.47 372 44.4 50.1
## health_positive 45.3 1.47 372 42.4 48.2
## neutral 46.6 1.47 372 43.7 49.5
## sensory 48.0 1.47 372 45.1 50.9
##
## Warning: EMMs are biased unless design is perfectly balanced
## Confidence level used: 0.95
##
## $contrasts
## contrast estimate SE df t.ratio p.value
## contextual - health_positive 1.946 1.44 507 1.355 0.5282
## contextual - neutral 0.681 1.44 507 0.474 0.9647
## contextual - sensory -0.727 1.44 507 -0.506 0.9576
## health_positive - neutral -1.265 1.44 507 -0.881 0.8148
## health_positive - sensory -2.673 1.44 507 -1.862 0.2461
## neutral - sensory -1.408 1.44 507 -0.981 0.7605
##
## P value adjustment: tukey method for comparing a family of 4 estimates
To quantify the evidence for the null, I’ll also run a Bayesian ANOVA and invert the resulting Bayes factor, so that it expresses evidence for the pp-only null model over the model that adds label_type. There’s strong evidence for the null. Usually I’d also conduct robustness checks, but there were no outliers in the model diagnostics.
desire_anova_bayes <-
  anovaBF(
    rating1 ~ label_type + pp,
    data = as.data.frame(desire %>% filter(food_type == "plant_based")),
    whichRandom = "pp"
  )
1/desire_anova_bayes
## Bayes factor analysis
## --------------
## [1] pp : 263.0408 ±0.69%
##
## Against denominator:
## rating1 ~ label_type + pp
## ---
## Bayes factor type: BFlinearModel, JZS
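The division above works because BFBayesFactor objects support arithmetic: 1/bf swaps numerator and denominator, turning BF10 (evidence for the label model) into BF01 (evidence for the null). A toy sketch with simulated data (not our data) to illustrate:

```r
library(BayesFactor)
set.seed(1)
toy <- data.frame(
  y = rnorm(40),
  g = factor(rep(c("a", "b"), each = 20))
)
bf <- anovaBF(y ~ g, data = toy)
# inverting the object is the same as taking the reciprocal of the BF
all.equal(extractBF(1 / bf)$bf, 1 / extractBF(bf)$bf)
## [1] TRUE
```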
Last, I check whether the results are robust to the exclusion of the two alleged vegetarians: the results don’t change.
desire_anova_no_vegetarians <-
  aov_ez(
    id = "pp",
    dv = "rating1",
    data = desire_plants %>%
      filter(!pp %in% c("pp_77", "pp_87")),
    within = "label_type"
  )
anova(desire_anova_no_vegetarians)
## Anova Table (Type 3 tests)
##
## Response: rating1
## num Df den Df MSE F ges Pr(>F)
## label_type 2.8937 489.04 181.68 1.2535 0.0026495 0.2898